Web Survey Bibliography
Survey research has historically relied on a probabilistic model to underlie its sampling frame. With rare exception, online research is non-probabilistic. Research without the safety net of a probabilistic frame raises all kinds of alarms, and challenges to the reliability of online research have grown to a crescendo as its non-probabilistic nature has become evident. However, not all sampling frames must be probabilistic. Unfortunately, no standard metrics exist to track reliability in online sampling. In fact, whether they are access panels or social networks, there are no standardized means of balancing panels or even comparing them. To confound the situation, the commercially used convenience panels are vastly different from each other (Gittelman and Trimarchi, CASRO Panel Conference, February 2009, paper available). These differences are so far-reaching that those who elect to use these sample sources are not only without a safety net, they are at considerable professional risk. We have completed an analysis of eighteen American panels and have found that respondent aging, the frequency of professional responders, other satisficing behaviors, and dramatic differences between sociological, psychographic and buying-behavior segmentations make for a cacophony of differences seemingly impossible to correct.
In this study we will present the results of an extensive global study covering forty countries. Within each country, panels will be compared using a 17-minute questionnaire, with 400 completes per panel. We hope to present five or more providers per market. No such extensive comparison has been done on a global basis; in fact, inter-panel comparisons themselves are rare, with data from only a very few having been presented on any scale.
Preliminary data (24,000 interviews) show evolutionary trends in convenience panel development. Between-panel differences appear more extreme in the United States than in other markets.
We are proposing two sets of practices: (1) using panel performance metrics [professionals, speeders, straight-liners, invalids, inconsistencies, etc.] for which we have developed standard quantitative measures, and (2) a mechanism for developing a family of sampling standards based upon segmentation by key variables such as, but not limited to, media, purchasing and psychographics. It is the new availability of global data that allows us to present universal standards meeting these requirements, which are our focus at this conference. In addition to measuring performance, we believe there are three key requirements for standard panel metrics: (1) the ability to capture panel performance variations consistent with the differing needs of sample users (e.g. a broadcasting company might wish to anchor its sampling frame to media segments); (2) the ability to create a database that is retrospective, in that new sample sources can be added without repeating the analysis; and (3) a focus on indices that are pragmatic in their measure (we view buying behavior as the most pragmatic).
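As an illustration of the kind of standard quantitative measure intended here, the sketch below flags two of the listed behaviors, speeders and straight-liners, in a respondent-level data set. The column names, the 50% duration threshold and the pandas-based approach are assumptions for illustration only, not the measures the authors actually computed.

```python
# Illustrative sketch (not the authors' code): flag speeders and straight-liners.
import pandas as pd

def flag_quality(responses: pd.DataFrame, grid_cols: list, median_minutes: float) -> pd.DataFrame:
    """Add simple per-respondent data-quality flags."""
    out = responses.copy()
    # Speeders: finished in less than half the reference interview length
    # (the study uses a 17-minute questionnaire as its reference).
    out["speeder"] = out["duration_minutes"] < 0.5 * median_minutes
    # Straight-liners: gave the identical answer to every item in a grid battery.
    out["straight_liner"] = out[grid_cols].nunique(axis=1) == 1
    return out

# Hypothetical usage: share of flagged respondents per panel as a comparable metric.
# flagged = flag_quality(df, grid_cols=["q5_1", "q5_2", "q5_3"], median_minutes=17)
# print(flagged.groupby("panel")[["speeder", "straight_liner"]].mean())
```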
In this talk, we propose to use segmentation analysis as a new metric that will allow us to anchor online data in a new non-probabilistic sampling frame. It is the existence of global data that gives us a rare opportunity to experiment with this new methodology. Our goal is to use segmentation in each country to create a fingerprint that can be consistently maintained by blending panels. By minimizing the variability of the segments through optimization and panel combination, we will establish a means of stabilizing online data irrespective of the panels and sourcing modes from which it is drawn. We cannot stabilize online data unless we provide it with a reference point to anchor itself; the segments are that anchor. As sourcing models continue to shift, panels will age and shift with them; we need a reliable anchor that rises above these problems. It is essential that we explore tools to measure these changes: without a means of comparison we cannot expect to measure drift, nor can we expect to have a platform for predicting the future. We do not profess to be on the road to a new probabilistic framework, but rather to a platform for comparison and continuity. We believe that there is a theoretical population online that can serve this purpose. Using the database we have gathered, which includes respondents from over 160 global panels (64,000 interviews) distributed among 40 global markets, we shall introduce new methods to build “perspective”.
Building on this, we will use our segmentation models to create a “convenience” sampling frame by averaging segments into a “Grand Mean.” Using optimization models, we will select the convenience panels that best reflect the grand mean and the proportions by which they best fit together. We shall give evidence for the efficiency of these strategies.
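A minimal sketch of what such a grand-mean blending optimization might look like: given each panel's distribution across behavioral segments, it searches for non-negative blending proportions, summing to one, that bring the blended profile as close as possible to the grand mean. The segment shares, panel labels and the SciPy SLSQP solver are illustrative assumptions, not the authors' actual model.

```python
# Illustrative sketch (not the authors' optimization model): blend panels toward the Grand Mean.
import numpy as np
from scipy.optimize import minimize

# Rows = candidate panels, columns = share of respondents in each behavioral segment (invented data).
panel_profiles = np.array([
    [0.30, 0.45, 0.25],   # panel A
    [0.20, 0.55, 0.25],   # panel B
    [0.40, 0.35, 0.25],   # panel C
])
# Assumed "Grand Mean": the average segment profile across all panels measured in this market.
grand_mean = np.array([0.28, 0.47, 0.25])

def distance(w):
    # Squared distance between the blended segment profile and the grand mean.
    blended = w @ panel_profiles
    return float(np.sum((blended - grand_mean) ** 2))

n = panel_profiles.shape[0]
result = minimize(
    distance,
    x0=np.full(n, 1.0 / n),                                        # start from an equal blend
    bounds=[(0.0, 1.0)] * n,                                       # proportions stay between 0 and 1
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # proportions sum to one
    method="SLSQP",
)
print("blending proportions:", result.x.round(3))
```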
Web survey bibliography (431)
- Interviewer effects on onliner and offliner participation in the German Internet Panel; 2017; Herzing, J. M. E.; Blom, A. G.; Meuleman, B.
- Millennials and emojis in Spain and Mexico.; 2017; Bosch Jover, O.; Revilla, M.
- Comparing the same Questionnaire between five Online Panels: A Study of the Effect of Recruitment Strategy...; 2017; Schnell, R.; Panreck, L.
- Do distractions during web survey completion affect data quality? Findings from a laboratory experiment...; 2017; Wenz, A.
- A Comparison of Two Nonprobability Samples with Probability Samples; 2017; Zack, E. S.; Kennedy, J. M.
- Targeted letters: Effects on sample composition and item non-response; 2017; Bianchi, A.; Biffignandi, S.
- Oversampling as a methodological strategy for the study of self-reported health among lesbian, gay and...; 2017; Anderssen, N.; Malterud, K.
- Analyzing Survey Characteristics, Participation, and Evaluation Across 186 Surveys in an Online Opt-...; 2017; Revilla, M.
- Comparison of response patterns in different survey designs: a longitudinal panel with mixed-mode and...; 2017; Ruebsamen, N.; Akmatov, M. K.; Castell, S.; Karch, A.; Mikolajczyk, R. T.
- Determinants of polling accuracy: the effect of opt-in Internet surveys; 2017; Sohlberg, J.; Gilljam, M.; Martinsson, J.
- Article Establishing an Open Probability-Based Mixed-Mode Panel of the General Population in Germany...; 2017; Bosnjak, M.; Dannwolf, T.; Enderle, T.; Schaurer, I.; Struminskaya, B.; Tanner, A.; Weyandt, K.
- Effects of Mobile versus PC Web on Survey Response Quality: a Crossover Experiment in a Probability...; 2017; Antoun, C.; Couper, M. P.; Conrad, F. G.
- Impact of satisficing behavior in online surveys on consumer preference and welfare estimates; 2016; Gao, Z.; House, L. A.; Bi, X.
- Comparing Twitter and Online Panels for Survey Recruitment of E-Cigarette Users and Smokers; 2016; Guillory, J.; Kim, A.; Murphy, J.; Bradfield, B.; Nonnemaker, J.; Hsieh, Y. P.
- Targeted Appeals for Participation in Letters to Panel Survey Members; 2016; Lynn, P.
- Motivated Misreporting in Web Panels; 2016; Bach, R.; Eckman, S.
- Using official surveys to reduce bias of estimates from nonrandom samples collected by web surveys; 2016; Beresovsky, V.; Dorfman, A.; Rumcheva, P.
- A Feasibility Study of Recruiting and Maintaining a Web Panel of People with Disabilities; 2016; Chandler, J.
- Inferences from Internet Panel Studies and Comparisons with Probability Samples; 2016; Lachan, R.; Boyle, J.; Harding, R.
- Exploring the Gig Economy Using a Web-Based Survey: Measuring the Online 'and' Offline Side...; 2016; Robles, B. J.; McGee, M.
- Comparing data quality between online panel and intercept samples; 2016; Liu, M.
- Integration of a phone-based household travel survey and a web-based student travel survey; 2016; Verreault, H.; Morency, C.
- Are Final Comments in Web Survey Panels Associated with Next-Wave Attrition?; 2016; McLauchlan, C.; Schonlau, M.
- Estimation and Adjustment of Self-Selection Bias in Volunteer Panel Web Surveys ; 2016; Niu, Ch.
- Participation in an Intensive Longitudinal Study with Weekly Web Surveys Over 2.5 Years; 2016; Barber, J. S.; Kusunoki, Y.; Gatny, H. H.; Schulz, P.
- The impact of survey duration on completion rates among Millennial respondents ; 2016; Coates, D.; Bliss, M.; Vivar, X.
- Cognitive Probing Methods in Usability Testing – Pros and Cons; 2016; Nichols, E. M.
- Assessing the Accuracy of 51 Nonprobability Online Panels and River Samples: A Study of the Advertising...; 2016; Yang, Y.; Callegaro, M.; Chin, K.; Villar, A.; Krosnick...
- Calculating Standard Errors for Nonprobability Samples when Matching to Probability Samples ; 2016; Lee, Ad.; ZuWallack, R. S.
- User Experience and Eye-tracking: Results to Optimize Completion of a Web Survey and Website Design ; 2016; Walton, L.; Ricci, K.; Libman Barry, A.; Eiginger, C.; Christian, L. M.
- Using Web Panels to Quantify the Qualitative: The National Center for Health Statistics Research and...; 2016; Scanlon, P. J.
- Does Changing Monetary Incentive Schemes in Panel Studies Affect Cooperation? A Quasi-experiment on...; 2016; Schaurer, I.; Bosnjak, M.
- Web Probing for Question Evaluation: The Effects of Probe Placement ; 2016; Fowler, S.; Willis, G. B.; Moser, R. P.; Townsend, R. L. M.; Maitland, A.; Sun, H.; Berrigan, D.
- Using Cash Incentives to Help Recruitment in a Probability Based Web Panel: The Effects on Sign Up Rates...; 2016; Krieger, U.
- Making Connections on the Internet: Online Survey Panel Communications ; 2016; Libman Barry, A.; Eiginger, C.; Walton, L.; Ricci, K.
- Evaluating a Modular Design Approach to Collecting Survey Data Using Text Messages ; 2016; West, B. T.; Ghimire, D.; Axinn, W.
- Safety First: Ensuring the Anonymity and Privacy of Iranian Panellists While Creating Iran...; 2016; Farmanesh, A.; Mohseni, E.
- Tracking the Representativeness of an Online Panel Over Time ; 2016; Klausch, L. T.; Scherpenzeel, A.
- Non-Observation Bias in an Address-Register-Based CATI/CAPI Mixed Mode Survey; 2016; Lipps, O.
- Bees to Honey or Flies to Manure? How the Usual Subject Recruitment Exacerbates the Shortcomings of...; 2016; Snell, S. A.; Hillygus, D. S.
- Thinking Inside the Box Visual Design of the Response Box Affects Creative Divergent Thinking in an...; 2016; Mohr, A. H.; Sell, A.; Lindsay, T.
- Establishing the accuracy of online panels for survey research; 2016; Bruggen, E.; van den Brakel, J.; Krosnick, J. A.
- Adaptive survey designs to minimize survey mode effects – a case study on the Dutch Labor Force...; 2016; Calinescu, M.; Schouten, B.
- What is the gain in a probability-based online panel to provide Internet access to sampling units that...; 2016; Revilla, M.; Cornilleau, A.; Cousteaux, A-S.; Legleye, S; de Pedraza, P.
- Representative web-survey!; 2016; Linde, P.
- The Utility of an Online Convenience Panel for Reaching Rare and Dispersed Populations; 2016; Sell, R.; Goldberg, S.; Conron, K.
- Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk; 2016; Berinsky, A.; Huber, G. A.; Lenz, G. S.
- Setting Up an Online Panel Representative of the General Population The German Internet Panel; 2016; Blom, A. G.; Gathmann, C.; Krieger, U.
- Reducing Underreports of Behaviors in Retrospective Surveys: The Effects of Three Different Strategies...; 2016; Lugtig, P. J.; Glasner, T.; Boeve, A.
- Dropouts in Longitudinal Surveys; 2016; Lugtig, P. J.; De Leeuw, E. D.